Real-time awareness of dynamic environments is crucial for autonomous robots operating in crowded spaces. Although popular voxel-based mapping methods can efficiently represent 3D obstacles of arbitrarily complex shapes, they can hardly distinguish between static and dynamic obstacles, leading to limited obstacle-avoidance performance. While plenty of learning-based dynamic obstacle detection algorithms exist in autonomous driving, the limited computational resources of a quadcopter cannot achieve real-time performance with those methods. To address these issues, we propose a real-time dynamic obstacle tracking and mapping system for quadcopter obstacle avoidance using an RGB-D camera. The proposed system first utilizes depth images together with an occupancy voxel map to generate potential dynamic obstacle regions as proposals. Given these region proposals, a Kalman filter and our continuity filter are applied to track each dynamic obstacle. Finally, an environment-aware trajectory prediction method based on a Markov chain is proposed using the states of the tracked dynamic obstacles. We implemented the proposed system on a customized quadcopter with a navigation planner. Simulation and physical experiments show that our method can successfully track and represent obstacles in dynamic environments and safely avoid them.
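To make the tracking step concrete, below is a minimal sketch of a constant-velocity Kalman filter for one obstacle's 3D position, of the kind the abstract describes; the state layout, noise magnitudes, and class name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class ObstacleKalmanFilter:
    """Constant-velocity Kalman filter for one tracked obstacle.

    State x = [px, py, pz, vx, vy, vz]; measurements are 3D centroids
    extracted from the depth-image region proposals.
    """

    def __init__(self, dt=0.1, process_var=1.0, meas_var=0.05):
        self.x = np.zeros(6)                      # state estimate
        self.P = np.eye(6)                        # state covariance
        self.F = np.eye(6)                        # state transition
        self.F[:3, 3:] = dt * np.eye(3)           # p += v * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = process_var * np.eye(6)          # process noise (assumed)
        self.R = meas_var * np.eye(3)             # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                         # predicted position

    def update(self, z):
        """z: 3D obstacle centroid from the current region proposal."""
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```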
Navigating dynamic environments requires the robot to generate collision-free trajectories and actively avoid moving obstacles. Most previous works design path-planning algorithms based on a single map representation, such as a geometric, occupancy, or ESDF map. Although they have shown success in static environments, these methods cannot reliably handle static and dynamic obstacles simultaneously due to the limitations of the map representation. To address this problem, this paper proposes a gradient-based B-spline trajectory optimization algorithm utilizing the robot's onboard vision. The depth vision enables the robot to track and represent dynamic objects geometrically based on a voxel map. The proposed optimization first adopts a circle-based guide-point algorithm to approximate the costs and gradients for avoiding static obstacles. Then, with the vision-detected moving objects, our receding-horizon distance field is simultaneously used to prevent dynamic collisions. Finally, an iterative re-guiding strategy is applied to generate the collision-free trajectory. Simulation and physical experiments prove that our method can run in real time to safely navigate dynamic environments.
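As a rough illustration of the dynamic-collision term, the following sketch evaluates a hinge-style distance cost over B-spline control points against constant-velocity obstacle predictions over the horizon; the cost form, safety margin, and function name are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def dynamic_collision_cost(ctrl_pts, obs_pos, obs_vel, dt=0.1, d_safe=0.5):
    """Hinge cost penalizing control points that come within d_safe of
    the obstacle's predicted position over the receding horizon.

    ctrl_pts: (N, 3) B-spline control points, one per time step
    obs_pos, obs_vel: (3,) current obstacle position and velocity
    Returns the total cost and its gradient w.r.t. each control point.
    """
    cost, grad = 0.0, np.zeros_like(ctrl_pts)
    for i, p in enumerate(ctrl_pts):
        pred = obs_pos + obs_vel * (i * dt)   # constant-velocity prediction
        diff = p - pred
        dist = np.linalg.norm(diff)
        if dist < d_safe:                     # inside the safety margin
            cost += (d_safe - dist) ** 2
            grad[i] = -2.0 * (d_safe - dist) * diff / max(dist, 1e-6)
    return cost, grad
```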
The combination of artist-curated scans and deep implicit functions (IF) is enabling the creation of detailed, clothed, 3D humans from images. However, existing methods are far from perfect. IF-based methods recover free-form geometry but produce disembodied limbs or degenerate shapes for unseen poses or clothes. To increase robustness for these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body. What we want is a method that combines the best properties of implicit and explicit methods. To this end, we make two key observations: (1) current networks are better at inferring detailed 2D maps than full 3D surfaces, and (2) a parametric model can be seen as a "canvas" for stitching together detailed surface patches. Building on these observations, our method, ECON, infers high-fidelity 3D humans even in loose clothes and challenging poses, with realistic faces and fingers, going beyond previous methods. Quantitative evaluation on the CAPE and Renderpeople datasets shows that ECON is more accurate than the state of the art. Perceptual studies also show that ECON's perceived realism is better by a large margin. Code and models are available for research purposes at https://xiuyuliang.cn/econ
With its continuously thriving popularity around the world, fitness activity analysis has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing hunger for data resources with high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instructions containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human poses captured from an advanced MoCap system to handle complex activities and large movements, 2) detailed and professional language instructions describing how to perform a specific activity, 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D offers great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code will be publicly available at https://andytang15.github.io/FLAG3D.
Achieving multiple genres and long-term choreography sequences from given music is a challenging task due to the lack of a multi-genre dataset. To tackle this problem, we propose a Multi Art Genre Intelligent Choreography Dataset (MagicDance). The data of MagicDance is captured from professional dancers assisted by motion capture technicians. It contains a total of 8 hours of 3D motion-captured human dances with paired music, spanning 16 different dance genres. To the best of our knowledge, MagicDance is the 3D dance dataset with the most genres. In addition, we find that the two existing types of methods (generation-based and synthesis-based) can each satisfy only one of diversity and duration, yet they complement each other to some extent. Based on this observation, we also propose a generation-synthesis choreography network (MagicNet), which cascades a Diffusion-based 3D Diverse Dance fragments Generation Network (3DGNet) and a Genre&Coherent aware Retrieval Module (GCRM). The former can generate various dance fragments from a single music clip. The latter is utilized to select the best dance fragments generated by 3DGNet and stitch them into a complete dance according to genre and coherence matching scores. Quantitative and qualitative experiments demonstrate the quality of MagicDance and the state-of-the-art performance of MagicNet.
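To give a feel for how a genre-and-coherence retrieval step could rank candidate fragments, here is a minimal sketch; the scoring functions, the splice-point coherence measure, and the weight w are our assumptions, not the GCRM implementation.

```python
import numpy as np

def select_fragment(prev_fragment, candidates, genre_probs, target_genre, w=0.5):
    """Pick the candidate dance fragment that best matches the target genre
    and transitions smoothly from the previously chosen fragment.

    prev_fragment: (T, J, 3) joint positions of the fragment already chosen
    candidates: list of (T, J, 3) fragments from the generator
    genre_probs: (len(candidates), G) per-candidate genre probabilities
    target_genre: index of the desired genre
    """
    best_idx, best_score = -1, -np.inf
    for i, frag in enumerate(candidates):
        genre_score = genre_probs[i, target_genre]
        # Coherence: negative pose distance across the splice boundary
        coherence = -np.linalg.norm(prev_fragment[-1] - frag[0])
        score = w * genre_score + (1 - w) * coherence
        if score > best_score:
            best_idx, best_score = i, score
    return candidates[best_idx]
```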
This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out the spatial-temporal tubes of the input video and the word tokens of the input text and then feed them into a unified autoencoder to reconstruct the missing pixels and words. Our SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with the help of the other modality, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g., 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g., 75%), much higher than BERT's (e.g., 15%), to achieve optimal performance. This is because the aid of the video modality makes text reconstruction less challenging, so a higher mask ratio is needed to make the pretext task hard enough for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), two commonly used cross-modal training strategies, further improves the transferable performance significantly. 4) SimVTP is data-efficient: pre-training on only 10% of the data of WebVid-2M, SimVTP achieves surprisingly good results (43.8 R@1) on MSRVTT, far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The code and models will be released at https://github.com/mayuelala/SimVTP.
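The asymmetric masking ratios can be illustrated with a short sketch; treating each tube as one token and the tensor shapes below are assumptions for illustration, not SimVTP's exact preprocessing.

```python
import torch

def mask_tokens(tokens, mask_ratio=0.9):
    """Randomly mask a fraction of tokens (video tubes or text words).

    tokens: (B, N, D) embeddings; a 'tube' is one token in this sketch.
    Returns the visible tokens and the boolean mask (True = masked).
    """
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                      # random score per token
    keep_idx = noise.argsort(dim=1)[:, :n_keep]   # lowest scores are kept
    mask = torch.ones(B, N, dtype=torch.bool)
    mask.scatter_(1, keep_idx, False)             # unmask the kept tokens
    visible = torch.gather(
        tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return visible, mask

# Video uses a high ratio; text uses a lower but still high ratio:
# vis_video, video_mask = mask_tokens(video_tokens, mask_ratio=0.90)
# vis_text,  text_mask  = mask_tokens(text_tokens,  mask_ratio=0.75)
```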
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation, including the face, body, hands, and feet, is essential beyond conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in real time. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum Suppression (P-NMS) for eliminating redundant human detections, and Pose-Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to a Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source code, and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
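A greedy pose NMS in the spirit of P-NMS can be sketched as follows; the OKS-style similarity is a stand-in criterion and the threshold is an assumption, not AlphaPose's learned parametric criterion.

```python
import numpy as np

def pose_similarity(kpts_a, kpts_b, scale, kappa=0.1):
    """OKS-style similarity between two poses; kpts: (K, 2) arrays."""
    d2 = np.sum((kpts_a - kpts_b) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * (scale * kappa) ** 2)))

def pose_nms(poses, scores, scales, thresh=0.7):
    """Greedy NMS: keep the highest-scoring pose, drop near-duplicates."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        remaining = [
            j for j in order[1:]
            if pose_similarity(poses[i], poses[j], scales[i]) < thresh
        ]
        order = np.array(remaining, dtype=int)
    return keep
```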
We present state advantage weighting for offline reinforcement learning (RL). In contrast to the action advantage $A(s,a)$ commonly adopted in QSA learning, we leverage the state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling the action from the values: the agent is expected to reach high-reward states, and the action is determined by how the agent can get to the corresponding state. Experiments on D4RL datasets show that our proposed method achieves remarkable performance against common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online.
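The decoupling can be made concrete with a short sketch: given learned $V(s)$ and $Q(s,s^\prime)$ networks, the state advantage weights how strongly a transition is favored. The exponential weighting (as in advantage-weighted regression) and the network interfaces below are our assumptions about one plausible instantiation, not the paper's code.

```python
import torch

def state_advantage(v_net, q_ss_net, s, s_next):
    """A(s, s') = Q(s, s') - V(s): how much better reaching s' is
    than the average value of being in s."""
    return q_ss_net(torch.cat([s, s_next], dim=-1)) - v_net(s)

def advantage_weight(v_net, q_ss_net, s, s_next, beta=3.0, clip=100.0):
    """Exponential advantage weighting, applied to state transitions
    instead of actions; clipped for numerical stability."""
    adv = state_advantage(v_net, q_ss_net, s, s_next)
    return torch.clamp(torch.exp(beta * adv), max=clip).detach()

# During policy extraction, this weight would scale the loss of an
# inverse model a = f(s, s') that recovers the action leading s -> s'.
```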
In research on subsurface seismic imaging, solving the acoustic wave equation is a key component of existing models. With the development of deep learning, neural networks have been applied to numerically solving partial differential equations, the wave equation in particular, by learning the mapping between the inputs and the solution of the equation, since traditional methods can be time-consuming when many instances need to be solved. Previous works focusing on solving the wave equation with neural networks consider a single velocity model or several simple velocity models, which is restrictive in practice. Therefore, inspired by the idea of operator learning, this work leverages the Fourier neural operator (FNO) to effectively learn frequency-domain seismic wavefields under variable velocity models. Moreover, we propose a new framework, the parallel Fourier neural operator (PFNO), for efficiently training FNO-based solvers given multiple source locations and frequencies. Numerical experiments demonstrate the high accuracy of both FNO and PFNO on complicated velocity models from the OpenFWI datasets. Furthermore, a cross-dataset generalization test verifies that PFNO adapts to out-of-distribution velocity models. PFNO also performs robustly in the presence of random noise in the labels. Finally, PFNO achieves higher computational efficiency than the traditional finite-difference method on large-scale test datasets. These advantages give FNO-based solvers the potential to build powerful models for seismic wave research.
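At the heart of an FNO is the spectral convolution layer; the following 1D sketch shows the standard pattern (FFT, learned multiplication on the lowest modes, inverse FFT). The channel sizes and mode count are arbitrary illustration choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Core FNO layer: multiply the lowest Fourier modes by learned
    complex weights, leaving the remaining modes at zero."""

    def __init__(self, in_ch, out_ch, modes=16):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):
        # x: (batch, in_ch, n_grid)
        x_ft = torch.fft.rfft(x)                        # to frequency domain
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device)
        out_ft[:, :, :self.modes] = torch.einsum(       # mix channels per mode
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])   # back to grid
```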
3D point clouds can flexibly represent continuous surfaces and are used in a variety of applications; however, the lack of structural information makes point cloud recognition challenging. Recent edge-aware methods mainly use edge information as an extra feature that describes local structures to facilitate learning. Although these methods show that incorporating edges into the network design is beneficial, they generally lack interpretability, leaving users wondering how exactly the edges help. To shed light on this issue, in this study we propose the Diffusion Unit (DU), which handles edges in an interpretable manner while providing decent improvements. Our method is interpretable in three ways. First, we theoretically show that DU learns to perform task-beneficial edge enhancement and suppression. Second, we experimentally observe and verify this edge enhancement and suppression behavior. Third, we empirically demonstrate that this behavior contributes to improved performance. Extensive experiments on challenging benchmarks verify the superiority of DU in terms of both interpretability and performance gain. Specifically, our method achieves state-of-the-art performance in object part segmentation on ShapeNet Part and scene segmentation on S3DIS. Our source code will be released at https://github.com/martianxiu/diffusionunit.
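One way to read the edge enhancement-and-suppression behavior is as learned, signed diffusion over a k-NN graph on the point features; the sketch below follows that reading, with the tanh-gated coefficient network and graph interface being our assumptions rather than the paper's exact operator.

```python
import torch
import torch.nn as nn

class DiffusionUnitSketch(nn.Module):
    """Signed feature diffusion over k-NN neighborhoods: a positive
    coefficient smooths (suppresses edges), a negative one sharpens
    (enhances edges)."""

    def __init__(self, dim):
        super().__init__()
        self.coef = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, feats, knn_idx):
        # feats: (N, D) point features; knn_idx: (N, K) neighbor indices
        neighbors = feats[knn_idx]                      # (N, K, D)
        center = feats.unsqueeze(1).expand_as(neighbors)
        # Per-neighbor signed diffusion coefficient in [-1, 1]
        c = self.coef(torch.cat([center, neighbors - center], dim=-1))
        # Diffusion step: move features along (or against) edge differences
        return feats + (c * (neighbors - center)).mean(dim=1)
```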